
Conversation


@alexander-alderman-webb alexander-alderman-webb commented Nov 7, 2025

Description

According to the user in the issue, Gemini can return null values for chat completion text. Verify that the messages are not null before calling model_dump().

See the corresponding schema: https://platform.openai.com/docs/api-reference/chat/object

Issues

Closes #5071

Reminders


codecov bot commented Nov 7, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 83.92%. Comparing base (720440e) to head (a9b0d86).
⚠️ Report is 4 commits behind head on master.
✅ All tests successful. No failed tests found.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #5081      +/-   ##
==========================================
- Coverage   83.97%   83.92%   -0.06%     
==========================================
  Files         179      179              
  Lines       17951    17951              
  Branches     3194     3194              
==========================================
- Hits        15075    15065      -10     
- Misses       1905     1913       +8     
- Partials      971      973       +2     
Files with missing lines Coverage Δ
sentry_sdk/integrations/openai.py 84.84% <100.00%> (ø)

... and 3 files with indirect coverage changes

@alexander-alderman-webb alexander-alderman-webb marked this pull request as ready for review November 7, 2025 12:07
@alexander-alderman-webb alexander-alderman-webb requested a review from a team as a code owner November 7, 2025 12:07
@cursor cursor bot left a comment

Bug: Nulls Undermine Token Usage Calculation

The fix for null choice.message values is incomplete. While the list comprehension on lines 246-250 now filters out null messages, the _calculate_token_usage function still only checks hasattr(choice, "message") without verifying the message is not None. This causes count_tokens to be called with None, likely triggering an error when tiktoken_encoding.encode_ordinary() is invoked.

sentry_sdk/integrations/openai.py#L155-L159

        output_tokens += count_tokens(message)
elif hasattr(response, "choices"):
    for choice in response.choices:
        if hasattr(choice, "message"):
            output_tokens += count_tokens(choice.message)
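The additional guard the bot suggests could look like the sketch below. This is illustrative only, not the actual sentry_sdk implementation: `output_tokens_from_choices` is a hypothetical helper, and `count_tokens` here is a word-count stand-in for the real tiktoken-based counter.

```python
from types import SimpleNamespace


def count_tokens(message):
    # Stand-in tokenizer for illustration: counts whitespace-separated
    # words instead of calling tiktoken_encoding.encode_ordinary().
    text = getattr(message, "content", "") or ""
    return len(text.split())


def output_tokens_from_choices(choices):
    total = 0
    for choice in choices:
        # getattr with a None default covers both a missing message
        # attribute and an explicit null message in one check.
        message = getattr(choice, "message", None)
        if message is not None:
            total += count_tokens(message)
    return total
```

Replacing the bare `hasattr(choice, "message")` with a check that the message is also not None keeps `count_tokens` from ever receiving None.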



@alexander-alderman-webb alexander-alderman-webb merged commit c747043 into master Nov 11, 2025
130 checks passed
@alexander-alderman-webb alexander-alderman-webb deleted the webb/check-null-openai branch November 11, 2025 09:03

Development

Successfully merging this pull request may close these issues.

OpenAI integration fails when response has no messages in choices.

3 participants